
    Fast and accurate classification of echocardiograms using deep learning

    Echocardiography is essential to modern cardiology. However, the need for human interpretation has limited high-throughput analysis, preventing echocardiography from reaching its full clinical and research potential for precision medicine. Deep learning is a cutting-edge machine-learning technique that has been useful in analyzing medical images but has not yet been widely applied to echocardiography, partly due to the complexity of echocardiograms' multi-view, multi-modality format. The essential first step toward comprehensive computer-assisted echocardiographic interpretation is determining whether computers can learn to recognize standard views. To this end, we anonymized 834,267 transthoracic echocardiogram (TTE) images from 267 patients (20 to 96 years, 51 percent female, 26 percent obese) seen between 2000 and 2017 and labeled them according to standard views. Images covered a range of real-world clinical variation. We built a multilayer convolutional neural network and used supervised learning to simultaneously classify 15 standard views. Eighty percent of the data was randomly chosen for training, and 20 percent was reserved for validation and testing on never-before-seen echocardiograms. Using multiple images from each clip, the model classified among 12 video views with 97.8 percent overall test accuracy without overfitting. Even on single low-resolution images, test accuracy among 15 views was 91.7 percent, versus 70.2 to 83.5 percent for board-certified echocardiographers. Confusion matrices, occlusion experiments, and saliency mapping showed that the model finds recognizable similarities among related views and classifies using clinically relevant image features. In conclusion, deep neural networks can classify essential echocardiographic views simultaneously and with high accuracy. Our results provide a foundation for more complex deep-learning-assisted echocardiographic interpretation. (31 pages, 8 figures)
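
    For orientation, the sketch below shows one way such a supervised view classifier can be wired up. It is a minimal illustration in PyTorch, not the network from the paper: the layer sizes, the 64x64 single-channel input resolution, and the per-clip logit averaging are all assumptions.

        # Minimal sketch of a multilayer CNN for echocardiographic view
        # classification. Layer sizes and input resolution are illustrative
        # assumptions, not the paper's architecture.
        import torch
        import torch.nn as nn

        NUM_VIEWS = 15  # standard views to distinguish

        class ViewClassifier(nn.Module):
            def __init__(self, num_views: int = NUM_VIEWS):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 32, kernel_size=3, padding=1),  # grayscale input
                    nn.ReLU(),
                    nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, kernel_size=3, padding=1),
                    nn.ReLU(),
                    nn.MaxPool2d(2),
                )
                self.classifier = nn.Sequential(
                    nn.Flatten(),
                    nn.Linear(64 * 16 * 16, 128),  # assumes 64x64 inputs
                    nn.ReLU(),
                    nn.Linear(128, num_views),     # one logit per view
                )

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                return self.classifier(self.features(x))

        model = ViewClassifier()
        frames = torch.randn(8, 1, 64, 64)       # 8 frames from one clip
        logits = model(frames)
        clip_view = logits.mean(dim=0).argmax()  # aggregate frames per clip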

    Customized care 2020: how medical sequencing and network biology will enable personalized medicine

    Applications of next-generation nucleic acid sequencing technologies will lead to the development of precision diagnostics that will, in turn, be a major technology enabler of precision medicine. Terabyte-scale, multidimensional data sets derived using these technologies will be used to reverse-engineer the specific disease networks that underlie individual patients' conditions. Modeling and simulation of these networks in the presence of virtual drugs, and combinations of drugs, will identify the most efficacious therapy for precision medicine and customized care. In coming years, the practice of medicine will routinely employ network biology analytics supported by high-performance supercomputing.
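
    As a toy illustration of the kind of analysis envisioned here, the sketch below scores "virtual drugs" by how strongly knocking out their target disconnects a disease gene in a small invented network. The network, the node and target names, and the scoring rule are all hypothetical; real models would be reverse-engineered from patient-specific data.

        # Toy sketch of "virtual drug" screening on a disease network.
        import networkx as nx

        # Hypothetical patient-specific disease network (invented edges).
        disease_net = nx.Graph([
            ("signal1", "kinase1"), ("signal3", "kinase1"),
            ("kinase1", "disease_gene"),
            ("signal2", "kinase2"), ("kinase2", "disease_gene"),
        ])

        def knockout_score(net: nx.Graph, target: str,
                           disease_node: str = "disease_gene") -> int:
            """Score a virtual drug by how many nodes lose their
            connection to the disease node when its target is inhibited."""
            perturbed = net.copy()
            perturbed.remove_node(target)  # model inhibition as knockout
            reachable = nx.node_connected_component(perturbed, disease_node)
            return perturbed.number_of_nodes() - len(reachable)

        # Rank candidate targets by how much they disrupt the disease module.
        candidates = ["kinase1", "kinase2"]
        ranking = sorted(candidates,
                         key=lambda t: knockout_score(disease_net, t),
                         reverse=True)
        print(ranking)  # kinase1 first: it severs more upstream signals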

    Advantages and Limitations of Anticipating Laboratory Test Results from Regression- and Tree-Based Rules Derived from Electronic Health-Record Data

    Laboratory testing is the single highest-volume medical activity, making it useful to ask how well one can anticipate whether a given test result will be high, low, or within the reference interval (“normal”). We analyzed 10 years of electronic health records—a total of 69.4 million blood tests—to see how well standard rule-mining techniques can anticipate test results based on patient age and gender, recent diagnoses, and recent laboratory test results. We evaluated rules according to their positive and negative predictive value (PPV and NPV) and areas under the receiver-operating characteristic curve (ROC AUCs). Using a stringent cutoff of PPV and/or NPV ≥ 0.95, standard techniques yield few rules for sendout tests but several for in-house tests, mostly for repeat laboratory tests that are part of the complete blood count and basic metabolic panel. Most rules were clinically and pathophysiologically plausible, and several seemed clinically useful for informing pre-test probability of a given result. But overall, rules were unlikely to be able to function as a general substitute for actually ordering a test. Improving laboratory utilization will likely require different input data and/or alternative methods.
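
    As a small illustration of the evaluation criterion described above, the sketch below computes PPV and NPV for a single mined rule from a 2x2 confusion table. The example rule and its counts are invented; the ≥ 0.95 cutoff is the one reported in the abstract.

        # Sketch: evaluating one mined rule by PPV and NPV.

        def ppv_npv(tp: int, fp: int, tn: int, fn: int) -> tuple[float, float]:
            """Predictive values from a 2x2 confusion table."""
            ppv = tp / (tp + fp)  # P(result abnormal | rule fires)
            npv = tn / (tn + fn)  # P(result normal | rule does not fire)
            return ppv, npv

        # Hypothetical counts for a rule such as
        # "last potassium high -> next potassium high":
        ppv, npv = ppv_npv(tp=480, fp=20, tn=9000, fn=500)
        keep_rule = ppv >= 0.95 or npv >= 0.95  # the study's stringent cutoff
        print(f"PPV={ppv:.3f}, NPV={npv:.3f}, keep={keep_rule}")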

    The Future of Blood Testing Is the Immunome

    It is increasingly clear that an extraordinarily diverse range of clinically important conditions (including infections, vaccinations, autoimmune diseases, transplants, transfusion reactions, aging, and cancers) leave telltale signatures in the millions of V(D)J-rearranged antibody and T cell receptor [TR per the Human Genome Organization (HUGO) nomenclature, but more commonly known as TCR] genes collectively expressed by a person's B cells (antibodies) and T cells. We refer to these as the immunome. Because of its diversity and complexity, the immunome provides singular opportunities for advancing personalized medicine by serving as the substrate for a highly multiplexed, near-universal blood test. Here we discuss some of these opportunities and the current state of immunome-based diagnostics, and highlight some of the challenges involved. We conclude with a call to clinicians, researchers, and others to join efforts with the Adaptive Immune Receptor Repertoire Community (AIRR-C) to realize the diagnostic potential of the immunome.

    The Landscape of Inappropriate Laboratory Testing: A 15-Year Meta-Analysis

    Background: Laboratory testing is the single highest-volume medical activity and drives clinical decision-making across medicine. However, the overall landscape of inappropriate testing, which is thought to be dominated by repeat testing, is unclear. Systematic differences in initial vs. repeat testing, measurement criteria, and other factors would suggest new priorities for improving laboratory testing. Methods: A multi-database systematic review was performed on published studies from 1997–2012 using strict inclusion and exclusion criteria. Over- vs. underutilization, initial vs. repeat testing, low- vs. high-volume testing, subjective vs. objective appropriateness criteria, and restrictive vs. permissive appropriateness criteria, among other factors, were assessed. Results: Overall mean rates of over- and underutilization were 20.6% (95% CI 16.2–24.9%) and 44.8% (95% CI 33.8–55.8%), respectively. Overutilization during initial testing (43.9%; 95% CI 35.4–52.5%) was six times higher than during repeat testing (7.4%; 95% CI 2.5–12.3%; P for stratum difference <0.001). Overutilization of low-volume tests (32.2%; 95% CI 25.0–39.4%) was three times that of high-volume tests (10.2%; 95% CI 2.6–17.7%; P<0.001). Overutilization measured according to restrictive criteria (44.2%; 95% CI 36.8–51.6%) was three times higher than for permissive criteria (12.0%; 95% CI 8.0–16.0%; P<0.001). Overutilization measured using subjective criteria (29.0%; 95% CI 21.9–36.1%) was nearly twice as high as for objective criteria (16.1%; 95% CI 11.0–21.2%; P = 0.004). Together, these factors explained over half (54%) of the overall variability in overutilization. There were no statistically significant differences between studies from the United States vs. elsewhere (P = 0.38) or among chemistry, hematology, microbiology, and molecular tests (P = 0.05–0.65), and no robust statistically significant trends over time. Conclusions: The landscape of overutilization varies systematically by clinical setting (initial vs. repeat), test volume, and measurement criteria. Underutilization is also widespread, but understudied. Expanding the current focus on reducing repeat testing to include ordering the right test during initial evaluation may lead to fewer errors and better care.
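
    To make the reported stratum summaries concrete, the sketch below computes an unweighted mean rate with a normal-approximation 95% CI for two invented strata. The per-study rates are hypothetical, and the meta-analytic weighting actually used in the paper is not reproduced here.

        # Sketch: mean utilization rate with a normal-approximation 95% CI
        # per stratum. Per-study rates are invented.
        from math import sqrt
        from statistics import mean, stdev

        def mean_with_ci(rates: list[float]) -> tuple[float, float, float]:
            m = mean(rates)
            half = 1.96 * stdev(rates) / sqrt(len(rates))  # 95% CI half-width
            return m, m - half, m + half

        initial = [0.52, 0.38, 0.47, 0.41, 0.35]  # hypothetical initial-testing rates
        repeat = [0.05, 0.09, 0.11, 0.06, 0.04]   # hypothetical repeat-testing rates

        for label, rates in (("initial", initial), ("repeat", repeat)):
            m, lo, hi = mean_with_ci(rates)
            print(f"{label}: {m:.1%} (95% CI {lo:.1%} to {hi:.1%})")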

    Chromosome Conformation Capture Carbon Copy (5C): a massively parallel solution for mapping interactions between genomic elements

    Physical interactions between genetic elements located throughout the genome play important roles in gene regulation and can be identified with the Chromosome Conformation Capture (3C) methodology. 3C converts physical chromatin interactions into specific ligation products, which are quantified individually by PCR. Here we present a high-throughput 3C approach, 3C-Carbon Copy (5C), that employs microarrays or quantitative DNA sequencing using 454 technology as detection methods. We applied 5C to analyze a 400-kb region containing the human beta-globin locus and a 100-kb conserved gene desert region. We validated 5C by detecting several previously identified looping interactions in the beta-globin locus. We also identified a new looping interaction in K562 cells between the beta-globin Locus Control Region and the gamma-beta-globin intergenic region. Interestingly, this region has been implicated in the control of developmental globin gene switching. 5C should be widely applicable for large-scale mapping of cis- and trans-interaction networks of genomic elements and for the study of higher-order chromosome structure.